-
Introduction: The ‘social brain hypothesis’ proposes that brain development (particularly in primates) is driven by social complexity rather than group size. Yet small insects with minute brains are capable of the most complex social organization in animals, which warrants further attention. Research has focused on highly eusocial hymenopterans with extreme caste specialization and very large colony sizes that have passed social evolutionary points of no return. However, facultatively social insects that form small colonies (< 20 individuals) are likely to provide greater insight into brain selection at the origin point of social group living.
Methods: We undertake the first neurobiological investigation of the facultatively social allodapine bees (Apidae: Xylocopinae: Allodapini), an exploratory study comparing single- and multi-female colonies of Exoneura angophorae. Using volume as a proxy for neural investment, we measured the mushroom body calyces, optic lobes, antennal lobes, and whole brains of queens, workers, and single females to test three theories associating brain development with behavior: the social brain hypothesis, the distributed cognition hypothesis, and the sensory environment hypothesis.
Results: Mushroom bodies were reduced in subordinate workers but did not differ between queens and single females. Workers had larger optic lobes than queens but did not differ from single females. There were no differences in antennal lobe or whole brain volume.
Discussion: Social caste, rather than multi-female versus single-female nesting, influenced mushroom body volume in this allodapine bee, counter to both social brain and distributed cognition theories and in alignment with halictine and ceratinine bees that also form small facultatively social colonies. Optic lobe enhancement is likely a response to dietary niche requirements for extra-nidal foraging behavior, which may be a highly plastic trait capable of rapid transition among allodapine and ceratinine bees, consistent with ecological intelligence hypotheses. These broad volumetric trends require further investigation of the functional neural circuitry involved in these environmental contexts.
Free, publicly-accessible full text available June 10, 2026
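To make the kind of volumetric comparison described above concrete, here is a minimal sketch of a caste-level analysis of relative brain region volumes. The data values, column names, and the choice of a Kruskal-Wallis test are illustrative assumptions, not the authors' actual measurements or statistical pipeline.

```python
# Illustrative sketch only: data values, column names, and the Kruskal-Wallis
# test are assumptions, not the authors' actual analysis pipeline.
import pandas as pd
from scipy import stats

# Hypothetical measurements: one row per bee, volumes in arbitrary units.
df = pd.DataFrame({
    "caste":         ["queen"] * 5 + ["worker"] * 5 + ["single_female"] * 5,
    "mushroom_body": [4.1, 4.3, 4.0, 4.2, 4.4,
                      3.2, 3.1, 3.4, 3.0, 3.3,
                      4.2, 4.0, 4.3, 4.1, 4.2],
    "whole_brain":   [52, 54, 51, 53, 55,
                      50, 49, 52, 48, 51,
                      53, 51, 54, 52, 53],
})

# Normalize each region by whole-brain volume so comparisons reflect relative
# neural investment rather than overall brain or body size.
df["mb_relative"] = df["mushroom_body"] / df["whole_brain"]

# Nonparametric test for any difference among the three groups.
groups = [g["mb_relative"].values for _, g in df.groupby("caste")]
h_stat, p_value = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```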
-
Discrepancy distance is a fundamental notion of distance between train and test distributions from the field of domain adaptation. While it is hard to compute in general, here we provide the first set of provably efficient algorithms for testing localized discrepancy distance, where discrepancy is computed with respect to a fixed output classifier. These results imply a broad set of new, efficient learning algorithms in the recently introduced model of Testable Learning with Distribution Shift (TDS learning) due to Klivans et al. (2023). Our approach generalizes and improves all prior work on TDS learning: (1) we obtain universal learners that succeed simultaneously for large classes of test distributions, (2) achieve near-optimal error rates, and (3) give exponential improvements for constant-depth circuits. Our methods further extend to semi-parametric settings and imply the first positive results for low-dimensional convex sets. Additionally, we separate the learning and testing phases and obtain algorithms that run in fully polynomial time at test time.
Free, publicly-accessible full text available December 10, 2025
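As a rough illustration of the central quantity, the sketch below estimates a localized discrepancy for a fixed classifier h over a small finite comparison class, using the gap in disagreement rates with h between training and test samples. The Gaussian data, the tiny class of halfspaces, and all names are illustrative assumptions; the paper's algorithms handle far richer settings with provable guarantees.

```python
# Toy estimator of localized discrepancy for a fixed classifier h:
#   disc_h(D, D') = max over f in F of | Pr_D[f != h] - Pr_D'[f != h] |.
# The finite class F, the shifted-Gaussian data, and all names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def halfspace(w):
    return lambda x: np.sign(x @ w)

h = halfspace(np.array([1.0, 0.0]))               # the fixed output classifier
F = [halfspace(np.array([np.cos(t), np.sin(t)]))  # a small comparison class
     for t in np.linspace(0.0, np.pi, 8)]

X_train = rng.normal(size=(5000, 2))                  # samples from D
X_test = rng.normal(loc=[0.5, 0.0], size=(5000, 2))   # samples from D' (shifted)

def disagreement(f, g, X):
    return np.mean(f(X) != g(X))

localized_disc = max(abs(disagreement(f, h, X_train) - disagreement(f, h, X_test))
                     for f in F)
print(f"estimated localized discrepancy: {localized_disc:.3f}")
```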
-
Abstract: Protein language models, like the popular ESM2, are widely used tools for extracting evolution-based protein representations and have achieved significant success on downstream biological tasks. Representations based on sequence and structure models, however, show significant performance differences depending on the downstream task. A major open problem is to obtain representations that best capture both the evolutionary and structural properties of proteins in general. Here we introduce the Implicit Structure Model (ISM), a sequence-only input model with structurally enriched representations that outperforms state-of-the-art sequence models on several well-studied benchmarks, including mutation stability assessment and structure prediction. Our key innovations are a microenvironment-based autoencoder for generating structure tokens and a self-supervised training objective that distills these tokens into ESM2's pre-trained model. We have made ISM's structure-enriched weights easily available: integrating ISM into any application using ESM2 requires changing only a single line of code. Our code is available at https://github.com/jozhang97/ISM.
Free, publicly-accessible full text available November 11, 2025
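The "single line of code" claim suggests a drop-in weight swap for an ESM2 backbone. The sketch below assumes ISM publishes ESM2-compatible weights loadable through the Hugging Face transformers API; the ISM checkpoint path is a placeholder, so consult the linked repository for the actual weights and loading instructions.

```python
# Hypothetical sketch of the advertised drop-in swap. The ESM2 checkpoint name is
# real; the ISM checkpoint path is a placeholder -- see the linked repository for
# the actual weights and loading instructions.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t33_650M_UR50D")

# Original ESM2 backbone:
# model = AutoModel.from_pretrained("facebook/esm2_t33_650M_UR50D")
# The claimed one-line change, swapping in ISM's structure-enriched weights:
model = AutoModel.from_pretrained("path/to/ism_weights")  # placeholder path

inputs = tokenizer("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", return_tensors="pt")
embeddings = model(**inputs).last_hidden_state  # per-residue representations
```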
-
We revisit the fundamental problem of learning with distribution shift, in which a learner is given labeled samples from a training distribution D, unlabeled samples from a test distribution D', and is asked to output a classifier with low test error. The standard approach in this setting is to bound the loss of a classifier in terms of some notion of distance between D and D'. These distances, however, seem difficult to compute and do not lead to efficient algorithms. We depart from this paradigm and define a new model called testable learning with distribution shift, where we can obtain provably efficient algorithms for certifying the performance of a classifier on a test distribution. In this model, a learner outputs a classifier with low test error whenever samples from D and D' pass an associated test; moreover, the test must accept (with high probability) if the marginal of D equals the marginal of D'. We give several positive results for learning well-studied concept classes such as halfspaces, intersections of halfspaces, and decision trees when the marginal of D is Gaussian or uniform on the hypercube. Prior to our work, no efficient algorithms for these basic cases were known without strong assumptions on D'. For halfspaces in the realizable case (where there exists a halfspace consistent with both D and D'), we combine a moment-matching approach with ideas from active learning to simulate an efficient oracle for estimating disagreement regions. To extend to the non-realizable setting, we apply recent work from testable (agnostic) learning. More generally, we prove that any function class with low-degree L2-sandwiching polynomial approximators can be learned in our model. Since we require L2-sandwiching (instead of the usual L1 loss), we cannot directly appeal to convex duality and instead apply constructions from the pseudorandomness literature to obtain the required approximators. We also provide lower bounds to show that the guarantees we obtain on the performance of our output hypotheses are best possible up to constant factors, as well as a separation showing that realizable learning in our model is incomparable to (ordinary) agnostic learning.
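For intuition on the moment-matching component mentioned above, here is a minimal sketch of a test that accepts a sample only if its low-degree empirical moments are close to those of a standard Gaussian base distribution. The degree cutoff, tolerance, and sample sizes are illustrative assumptions; the paper combines moment matching with further machinery, such as the disagreement-region oracle described above.

```python
# Minimal sketch of a moment-matching acceptance test: accept the unlabeled test
# sample only if its low-degree empirical moments are close to those of the
# assumed base distribution (standard Gaussian here). Thresholds and the degree
# cutoff are illustrative choices.
import numpy as np
from itertools import combinations_with_replacement

def gaussian_moment(alpha):
    """Moment E[prod_i x_i^{alpha_i}] of a standard Gaussian: product of 1D
    moments, where E[x^k] = 0 for odd k and (k-1)!! for even k."""
    m = 1.0
    for a in alpha:
        if a % 2 == 1:
            return 0.0
        m *= np.prod(np.arange(a - 1, 0, -2)) if a > 0 else 1.0
    return m

def moments_match(X, max_degree=4, tol=0.15):
    n, d = X.shape
    for degree in range(1, max_degree + 1):
        for idx in combinations_with_replacement(range(d), degree):
            alpha = np.bincount(np.array(idx), minlength=d)
            empirical = np.mean(np.prod(X ** alpha, axis=1))
            if abs(empirical - gaussian_moment(alpha)) > tol:
                return False  # reject: marginal deviates from the base distribution
    return True

rng = np.random.default_rng(1)
print(moments_match(rng.normal(size=(100_000, 3))))          # True: Gaussian
print(moments_match(rng.uniform(-2, 2, size=(100_000, 3))))  # False: 2nd moments differ
```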
-
Abstract: Engineering stabilized proteins is a fundamental challenge in the development of industrial and pharmaceutical biotechnologies. We present Stability Oracle: a structure-based graph-transformer framework that achieves state-of-the-art (SOTA) performance on accurately identifying thermodynamically stabilizing mutations. Our framework introduces several innovations to overcome well-known challenges in data scarcity and bias, generalization, and computation time: Thermodynamic Permutations for data augmentation, structural amino acid embeddings to model a mutation with a single structure, and a protein structure-specific attention-bias mechanism that makes transformers a viable alternative to graph neural networks. We provide training/test splits that mitigate data leakage and ensure proper model evaluation. Furthermore, to examine our data engineering contributions, we fine-tune ESM2 representations (Prostata-IFML) and achieve SOTA for sequence-based models. Notably, Stability Oracle outperforms Prostata-IFML even though it was pretrained on 2,000x fewer proteins and has 548x fewer parameters. Our framework establishes a path for fine-tuning structure-based transformers to virtually any phenotype, a necessary task for accelerating the development of protein-based biotechnologies.
Free, publicly-accessible full text available December 1, 2025
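The Thermodynamic Permutations augmentation is not spelled out in the abstract; one standard reading is that ΔΔG labels at a single site, all measured from a common wild type, can be antisymmetrized (reverse mutations) and composed (paths through intermediate residues), because they are differences of per-state free energies. The sketch below implements that reading under illustrative names and values; the paper's exact augmentation rules may differ.

```python
# Hedged sketch of thermodynamic-permutation-style augmentation: since ddG values
# at one site are differences of per-state free energies, measurements from a
# common wild type can be antisymmetrized and composed. The exact rules used by
# Stability Oracle may differ; names and values are illustrative.
from itertools import permutations

# Measured ddG (kcal/mol) for mutations at one site of a hypothetical protein,
# all from wild-type residue 'A'.
measured = {("A", "B"): 1.2, ("A", "C"): -0.5, ("A", "D"): 2.0}

def augment(measured):
    # Assign a free energy to each residue state, anchoring the wild type at 0.
    energy = {"A": 0.0}
    for (wt, mut), ddg in measured.items():
        energy[mut] = energy[wt] + ddg
    # Every ordered pair of states then yields a (possibly new) labeled example.
    return {(a, b): energy[b] - energy[a] for a, b in permutations(energy, 2)}

augmented = augment(measured)
print(len(measured), "->", len(augmented), "examples")  # 3 -> 12
print(augmented[("B", "A")])  # reverse mutation: -1.2
print(augmented[("B", "C")])  # composed path: -0.5 - 1.2 = -1.7
```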
-
This paper investigates the problem of computing discrepancy distance, a key notion of distance between training and test distributions in domain adaptation. While computing discrepancy distance is generally hard, the authors present the first provably efficient algorithms for testing localized discrepancy distance, where the measure is computed with respect to a fixed output classifier. These results lead to a new family of efficient learning algorithms under the recently introduced Testable Learning with Distribution Shift (TDS learning) framework (Klivans et al., 2023). The authors' contributions include: (1) universal learners that succeed simultaneously across a wide range of test distributions, (2) algorithms achieving near-optimal error rates, and (3) exponential improvements for constant-depth circuits. Their methods also extend to semi-parametric settings and yield the first positive results for low-dimensional convex sets. Furthermore, by separating the learning and testing phases, the authors provide algorithms that run in fully polynomial time at test time.
-
We give the first efficient algorithm for learning halfspaces in the testable learning model recently defined by Rubinfeld and Vasilyan [2022]. In this model, a learner certifies that the accuracy of its output hypothesis is near optimal whenever the training set passes an associated test, and training sets drawn from some target distribution must pass the test. This model is more challenging than distribution-specific agnostic or Massart noise models, where the learner is allowed to fail arbitrarily if the distributional assumption does not hold. We consider the setting where the target distribution is the standard Gaussian in d dimensions and the label noise is either Massart or adversarial (agnostic). For Massart noise, our tester-learner runs in polynomial time and outputs a hypothesis with (information-theoretically optimal) error opt + ε (and extends to any fixed strongly log-concave target distribution). For adversarial noise, our tester-learner obtains error O(opt) + ε in polynomial time. Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution. Here we develop new tests of independent interest that make critical use of the labels and combine them with the moment-matching approach of Gollakota et al. [2022]. This enables us to implement a testable variant of the algorithm of Diakonikolas et al. [2020a, 2020b] for learning noisy halfspaces using nonconvex SGD.
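As a toy rendition of the learning component cited at the end of the abstract, the sketch below runs projected SGD on a LeakyReLU surrogate loss to fit a halfspace under synthetic label noise, in the spirit of the Diakonikolas et al. algorithm. The hyperparameters, the random-flip noise model, and the omission of the label-aware tests are all simplifying assumptions.

```python
# Toy projected SGD on a LeakyReLU surrogate loss for a noisy halfspace, in the
# spirit of the Diakonikolas et al. algorithm the abstract cites. Step sizes, the
# leakage parameter, and the random-flip noise are illustrative; the paper's
# tester-learner additionally runs label-aware tests before learning.
import numpy as np

rng = np.random.default_rng(42)
d, n, lam, lr = 10, 20_000, 0.1, 0.05

# Synthetic data: standard Gaussian marginal, labels from a ground-truth
# halfspace with 10% random label flips (a crude stand-in for Massart noise).
w_star = np.zeros(d)
w_star[0] = 1.0
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)
y[rng.random(n) < 0.10] *= -1

def leaky_grad_weight(margin, lam):
    # Slope of LeakyReLU_lam(t) = (1-lam)*t*[t>=0] + lam*t*[t<0] at t = margin.
    return np.where(margin >= 0, 1 - lam, lam)

w = rng.normal(size=d)
w /= np.linalg.norm(w)
for _ in range(200):
    batch = rng.integers(0, n, size=256)
    Xb, yb = X[batch], y[batch]
    margin = -yb * (Xb @ w)  # surrogate-loss argument: positive on mistakes
    grad = (leaky_grad_weight(margin, lam)[:, None] * (-yb[:, None] * Xb)).mean(0)
    w -= lr * grad
    w /= np.linalg.norm(w)   # project back onto the unit sphere

print("error vs. clean labels:", np.mean(np.sign(X @ w) != np.sign(X @ w_star)))
```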